The last decade has witnessed the breakthrough of deep neural networks (DNNs) in many fields. With the increasing depth of DNNs, hundreds of millions of multiply-and-accumulate (MAC) operations need to be executed. To accelerate such operations efficiently, analog in-memory computing platforms based on emerging devices, e.g., resistive RAM (RRAM), have been introduced. These acceleration platforms rely on the analog properties of the devices and thus suffer from process variations and noise. Consequently, the weights of neural networks mapped onto these platforms can deviate from their expected values, which may lead to feature errors and a significant degradation of inference accuracy. To address this issue, in this paper, we propose a framework to enhance the robustness of neural networks under variations and noise. First, a modified Lipschitz constant regularization is applied during neural network training to suppress the amplification of errors propagated through network layers. Afterwards, error compensation is introduced at locations determined by reinforcement learning to rescue the feature maps that still contain errors. Experimental results demonstrate that the inference accuracy of neural networks can be recovered from as low as 1.69% under variations and noise back to more than 95% of their original accuracy, while the training and hardware costs remain negligible.
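The abstract only names a "modified" Lipschitz constant regularization without spelling out its form. Below is a minimal sketch of one common way such a regularizer can be attached to the training loss: penalizing each layer's spectral norm, which upper-bounds that layer's Lipschitz constant. The penalty form, the coefficient `lam`, and the helper names are illustrative assumptions, not the authors' exact method.

```python
# Minimal sketch of a Lipschitz-style regularizer added to the training loss.
# The spectral-norm penalty below is an illustrative assumption; the paper's
# "modified" regularization is not specified in the abstract.
import torch
import torch.nn as nn

def lipschitz_penalty(model: nn.Module) -> torch.Tensor:
    """Sum of per-layer spectral norms, an upper bound on each layer's Lipschitz constant."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            w = module.weight.flatten(1)                      # (out_features, fan_in)
            penalty = penalty + torch.linalg.matrix_norm(w, ord=2)
    return penalty

def training_step(model, x, y, criterion, optimizer, lam=1e-4):
    """One step of task loss plus the hypothetical Lipschitz penalty."""
    optimizer.zero_grad()
    loss = criterion(model(x), y) + lam * lipschitz_penalty(model)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the penalty simply discourages large per-layer gains, so that weight deviations introduced by the analog hardware are amplified less as activations propagate through the network.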
Deep networks should be robust to rare events if they are to be deployed successfully in high-stakes real-world applications such as self-driving cars. Here, we study the ability of deep networks to recognize objects in unusual poses. We create a synthetic dataset of images of objects in unusual orientations and evaluate the robustness of 38 recent and competitive deep networks for image classification. We show that classifying these images remains a challenge for all tested networks, with an average accuracy drop of 29.5% relative to when the objects are shown upright. This brittleness is largely unaffected by various network design choices, such as the training loss (e.g., supervised vs. self-supervised), the architecture (e.g., convolutional networks vs. transformers), the dataset modality (e.g., images vs. image-text pairs), and the data-augmentation scheme. However, networks trained on very large datasets substantially outperform the others: the best network tested, a Noisy Student EfficientNet-L2 trained on JFT-300M, shows a relatively small accuracy drop of only 14.5 percentage points on unusual poses. Nevertheless, a visual inspection of Noisy Student's failures reveals a remaining gap with the robustness of the human visual system. Furthermore, combining multiple object transformations, namely 3D rotation and zooming, further degrades the performance of all networks. Overall, our results provide another measure of the robustness of deep networks that is important to consider when they are used in the real world. Code and datasets are available at https://github.com/amro-kamal/objectpose.
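The reported accuracy drops amount to comparing a classifier's top-1 accuracy on upright renders against the same objects in unusual orientations. A minimal sketch of that comparison is given below; the ResNet-50 model, the preprocessing pipeline, and the sample lists of (image path, ImageNet class index) pairs are illustrative assumptions rather than the paper's 38-network setup, for which see https://github.com/amro-kamal/objectpose.

```python
# Minimal sketch: measure the top-1 accuracy drop between upright and
# unusual-pose images. Model choice, preprocessing, and sample lists are
# assumptions for illustration only.
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def top1_accuracy(model, samples):
    """samples: list of (image_path, imagenet_class_index) pairs."""
    correct = 0
    for path, target in samples:
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        correct += int(model(x).argmax(dim=1).item() == target)
    return correct / len(samples)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()

# Example usage, with hypothetical sample lists pointing at the same objects
# rendered upright vs. in unusual orientations:
#   drop = top1_accuracy(model, upright_samples) - top1_accuracy(model, pose_samples)
#   print(f"accuracy drop under unusual poses: {drop:.1%}")
```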